Stochastic Minimization with Adaptive Memory

Author

  • R. C. Gonzalez
Abstract

Fig. 12. Comparison of the number of function evaluations obtained with Gaussian and exponential damping functions; the results of a complete sweep over 24 values of m in (0,1] are shown.

Finally, we should note that the choice of Gaussian damping for the memory mechanism is not critical. We experimented with a pure exponential damping and obtained similar results. This suggests that the dynamic mechanism used to maintain the necessary memory depth is sufficiently strong to cope with the effects deriving from the choice of different damping functions (see Fig. 12).

A random search algorithm with adaptive memory has been presented, characterized by the use of an adaptive Gaussian memory for biasing the exploration. The algorithm has been compared to its variant without memory and to traditional techniques on several minimization tasks. Of particular interest is the application to the computation of the minimum eigenvalue of a singular differential operator on a Hilbert space, for which traditional techniques perform badly.

Acknowledgements. The authors would like to thank Prof. T. Poggio and B. Caprile for their suggestions. G. T. would like to warmly thank E. Onofri and V. Fateev for stimulating discussions.

(b) we can efficiently compute the values of \(\psi\) at the points

\[ x_j = \frac{j}{n}\,4K(m) \tag{21} \]

using a Fast Fourier Transform algorithm [14];

(c) the derivative terms contained in \(H_1\) and \(H_2\) can be computed directly in Fourier space because, from Eq. (20), we have

\[ \frac{d\psi}{dx}(x) = \sum_{j=1}^{n} w_j\,\frac{2\pi j}{4K(m)}\,\psi_j(x) \tag{22} \]

This allows the fast computation of \(H_1\psi\) and \(H_2\psi\) without ever evaluating the matrix representations of \(H_1\) and \(H_2\).

2. Given the set \(\{m_i\}\) such that \(0 < m_1 < m_2 < \cdots < m_k \le 1\), compute \(\psi(x; m_k)\) with the algorithm of Fig. 1.
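To make the adaptive-memory mechanism concrete, here is a minimal sketch in Python of a random search whose proposals are biased by a damped Gaussian memory of previously accepted points. The function name adaptive_memory_search, the damping rate gamma, and the way the memory centre enters the proposal are illustrative assumptions; this fragment of the paper does not specify those details.

    import numpy as np

    def adaptive_memory_search(f, x0, n_iter=5000, sigma=0.5, gamma=0.9, seed=0):
        """Random search biased by an adaptive Gaussian memory (illustrative sketch).

        The memory is a damped average of previously accepted points; proposals
        are drawn from a Gaussian centred between the current best point and the
        memory, so exploration is biased toward historically good regions.
        """
        rng = np.random.default_rng(seed)
        x_best = np.asarray(x0, dtype=float)
        f_best = f(x_best)
        memory = x_best.copy()                   # damped average of accepted points
        for _ in range(n_iter):
            centre = 0.5 * (x_best + memory)     # memory-biased proposal centre
            x_new = centre + sigma * rng.standard_normal(x_best.shape)
            f_new = f(x_new)
            if f_new < f_best:                   # accept improvements only
                x_best, f_best = x_new, f_new
                # damping: old memory decays geometrically, in the spirit of the
                # paper's gaussian/exponential damping of old information
                memory = gamma * memory + (1.0 - gamma) * x_best
        return x_best, f_best

    # usage: minimize the Rosenbrock function from a poor starting point
    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    x_min, f_min = adaptive_memory_search(rosen, x0=[-1.5, 2.0])
    print(x_min, f_min)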
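Steps (b) and (c) amount to a standard periodic spectral method: sample \(\psi\) on the uniform grid of Eq. (21) and apply \(d/dx\) mode by mode as in Eq. (22), so applying \(H_1\) and \(H_2\) costs a pair of FFTs instead of a dense matrix-vector product. A minimal sketch, assuming period \(L = 4K(m)\) and a complex-exponential Fourier basis (the fragment does not fix the basis convention):

    import numpy as np
    from scipy.special import ellipk  # complete elliptic integral K(m)

    def spectral_derivative(psi, m):
        """Differentiate samples of an L-periodic function, L = 4K(m), via FFT.

        psi holds the values on the uniform grid x_j = (j/n) * 4K(m) of Eq. (21)
        (up to the indexing convention).  Multiplying each Fourier mode by its
        wavenumber 2*pi*k/L realises Eq. (22) without forming any matrix.
        """
        n = len(psi)
        L = 4.0 * ellipk(m)                           # period of the problem
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
        return np.fft.ifft(1j * k * np.fft.fft(psi)).real

    # usage: the derivative of sin(2*pi*x/L) is (2*pi/L) * cos(2*pi*x/L)
    m = 0.5
    L = 4.0 * ellipk(m)
    x = np.arange(64) * L / 64
    err = np.max(np.abs(spectral_derivative(np.sin(2 * np.pi * x / L), m)
                        - (2 * np.pi / L) * np.cos(2 * np.pi * x / L)))
    print(err)  # close to machine precision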

Similar resources

Adaptive Steffensen-like Methods with Memory for Solving Nonlinear Equations with the Highest Possible Efficiency Indices

The primary goal of this work is to introduce two adaptive Steffensen-like methods with memory that attain the highest possible efficiency indices. In existing methods that apply the memory concept to improve the convergence order, the focus has been only on the current and previous iterations. However, it is possible to improve the accelerators. Therefore, we achieve superior convergence orders and obtain as hi...
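As a toy illustration of the with-memory idea described in this abstract, the sketch below implements the classical Traub-Steffensen iteration with a single self-accelerating parameter, which raises the convergence order from 2 to about 2.414 at no extra function evaluations per step; the paper's own two methods are not recoverable from this truncated abstract.

    def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
        """Traub-Steffensen iteration with one self-accelerating parameter.

        w_n         = x_n + gamma_n * f(x_n)
        x_{n+1}     = x_n - f(x_n) / f[w_n, x_n]   (first-order divided difference)
        gamma_{n+1} = -1 / f[x_n, x_{n+1}]         (memory: reuse the last iterate)
        """
        x, gamma = float(x0), float(gamma0)
        fx = f(x)
        for _ in range(max_iter):
            w = x + gamma * fx
            dd = (f(w) - fx) / (w - x)             # divided difference f[w, x]
            x_new = x - fx / dd
            fx_new = f(x_new)
            if abs(fx_new) < tol or fx_new == fx:  # converged (or stalled)
                return x_new
            gamma = -(x_new - x) / (fx_new - fx)   # the with-memory accelerator
            x, fx = x_new, fx_new
        return x

    # usage: root of f(x) = x**3 - 2
    print(steffensen_with_memory(lambda x: x**3 - 2, 1.0))  # ~1.259921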

First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization

This paper studies empirical risk minimization (ERM) problems for large-scale datasets and incorporates the idea of adaptive sample size methods to improve the guaranteed convergence bounds for first-order stochastic and deterministic methods. In contrast to traditional methods that attempt to solve the ERM problem corresponding to the full dataset directly, adaptive sample size schemes start w...
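A minimal sketch of the adaptive sample size scheme this abstract describes, assuming doubling stages, a ridge-regularized logistic loss, and plain gradient descent as the inner first-order solver (all illustrative choices, not the paper's exact setup):

    import numpy as np

    def adaptive_sample_size_erm(X, y, lam=0.1, n0=64, inner_steps=100, lr=0.1):
        """Solve regularized ERM by doubling the sample size between stages.

        Each stage solves the subproblem on the first m samples and warm-starts
        the next stage on 2m samples, so early iterations are cheap and the
        final stage starts close to the full-data solution.
        """
        n, d = X.shape
        w = np.zeros(d)
        m = min(n0, n)
        while True:
            Xm, ym = X[:m], y[:m]
            for _ in range(inner_steps):          # inner first-order solver
                z = Xm @ w
                grad = Xm.T @ (1.0 / (1.0 + np.exp(-z)) - ym) / m + lam * w
                w -= lr * grad
            if m == n:
                return w
            m = min(2 * m, n)                     # double the sample size

    # usage with synthetic data (hypothetical example)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 5))
    y = (X @ rng.standard_normal(5) > 0).astype(float)
    print(adaptive_sample_size_erm(X, y))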

Adaptive Tunable Vibration Absorber using Shape Memory Alloy

This study presents a new approach to control the nonlinear dynamics of an adaptive absorber using a shape memory alloy (SMA) element. Shape memory alloys are classified as smart materials that can remember their original shape after deformation. Stress- and temperature-induced phase transformations are two typical behaviors of shape memory alloys. Changing the stiffness associated with phase tran...

Structure of an Adaptive with Memory Method with Efficiency Index 2

The current research develops a derivative-free family of with-memory methods with a 100% improvement in the order of convergence. They have three parameters. The parameters are approximated so as to increase the convergence order from 4 to 6, 7, 7.5 and 8, respectively. Additionally, the new self-accelerating parameters are constructed in a new way. They have the properties of simple structures and they a...

Stochastic Dual Coordinate Ascent with Adaptive Probabilities

This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems. Our modification consists in allowing the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed pr...
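The toy sketch below shows the flavour of adaptive probabilities in SDCA, using ridge regression (squared loss) and sampling each dual coordinate with probability proportional to its current dual residual; this residual rule is a simplified heuristic in the spirit of AdaSDCA, not the paper's exact probability law:

    import numpy as np

    def sdca_adaptive(X, y, lam=0.1, epochs=20, seed=0):
        """SDCA for ridge regression with adaptively chosen coordinates.

        Maintains the primal-dual link w = X^T alpha / (lam * n) and updates one
        dual coordinate per step, sampled in proportion to its dual residual
        instead of uniformly as in plain SDCA.
        """
        n, d = X.shape
        alpha, w = np.zeros(n), np.zeros(d)
        sq = np.einsum('ij,ij->i', X, X)            # ||x_i||^2 for each row
        rng = np.random.default_rng(seed)
        for _ in range(epochs * n):
            resid = np.abs(y - X @ w - alpha)       # dual residuals
            p = resid / resid.sum() if resid.sum() > 0 else np.full(n, 1.0 / n)
            i = rng.choice(n, p=p)
            # closed-form dual coordinate update for the squared loss
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)
        return w

    # usage with synthetic data (hypothetical example)
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))
    y = X @ rng.standard_normal(10)
    w = sdca_adaptive(X, y)
    print(np.linalg.norm(X @ w - y) / np.linalg.norm(y))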

Adaptive Stochastic Gradient Descent on the Grassmannian for Robust Low-Rank Subspace Recovery

In this paper, we present GASG21 (Grassmannian Adaptive Stochastic Gradient for L2,1-norm minimization), an adaptive stochastic gradient algorithm to robustly recover a low-rank subspace from a large matrix. In the presence of column-outlier corruption, we reformulate the classical matrix L2,1-norm minimization problem as its stochastic programming counterpart. For each observed data vector,...
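A rough sketch of the kind of update this abstract describes: for each observed column, take a stochastic (sub)gradient step on the column-wise L2 residual (the summands of the L2,1 objective) and retract back to an orthonormal basis with a QR factorization. GASG21's exact adaptive step size and Grassmannian geometry are not recoverable from this truncated abstract, so the step rule here is purely illustrative:

    import numpy as np

    def grassmann_sg_l21(columns, d, r, eta=0.1, seed=0):
        """Track an r-dimensional subspace of R^d from streaming columns.

        For each vector x, the subgradient of ||x - U w||_2 with w = U^T x held
        fixed is -(res / ||res||) w^T, where res = x - U w; a decaying descent
        step followed by QR re-orthonormalization keeps U an orthonormal basis.
        """
        rng = np.random.default_rng(seed)
        U, _ = np.linalg.qr(rng.standard_normal((d, r)))   # random initial basis
        for t, x in enumerate(columns):
            w = U.T @ x                     # least-squares weights in the basis
            res = x - U @ w                 # residual orthogonal to span(U)
            nrm = np.linalg.norm(res)
            if nrm > 1e-12:
                step = eta / np.sqrt(t + 1.0)
                U = U + step * np.outer(res / nrm, w)
                U, _ = np.linalg.qr(U)      # retraction to an orthonormal basis
        return U

    # usage: recover a 3-dimensional subspace (hypothetical data)
    rng = np.random.default_rng(1)
    basis = np.linalg.qr(rng.standard_normal((20, 3)))[0]
    cols = (basis @ rng.standard_normal((3, 500))).T
    U = grassmann_sg_l21(cols, d=20, r=3)
    print(np.linalg.norm(U @ U.T @ basis - basis))  # residual of the true basis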

Journal title: —

Volume/Issue: —

Pages: —

Publication date: 1992